
OALib Journal

ISSN: 2333-9721

Publication fee: US$99

Search query: “Varadhan Ravi”; about 3,146 matching results found.
All articles listed are available free of charge.
Page 1 of 3,146 results
BB: An R Package for Solving a Large System of Nonlinear Equations and for Optimizing a High-Dimensional Nonlinear Objective Function
Ravi Varadhan, Paul Gilbert
Journal of Statistical Software, 2009.
Abstract: We discuss R package BB, in particular, its capabilities for solving a nonlinear system of equations. The function BBsolve in BB can be used for this purpose. We demonstrate the utility of these functions for solving: (a) large systems of nonlinear equations, (b) smooth, nonlinear estimating equations in statistical modeling, and (c) non-smooth estimating equations arising in rank-based regression modeling of censored failure time data. The function BBoptim can be used to solve smooth, box-constrained optimization problems. A main strength of BB is that, due to its low memory and storage requirements, it is ideally suited for solving high-dimensional problems with thousands of variables.
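BBsolve itself is an R function; as a hedged illustration of the idea behind it (not the package's dfsane/BBsolve algorithm, which adds a nonmonotone line search and other safeguards), the Python sketch below applies a bare Barzilai-Borwein spectral residual iteration to a small monotone nonlinear system. The system, the function name `bb_solve`, and all parameters are illustrative assumptions.

```python
import numpy as np

def bb_solve(F, x0, tol=1e-8, max_iter=500):
    """Toy Barzilai-Borwein spectral residual iteration for F(x) = 0.

    Sketch only: x_{k+1} = x_k - F(x_k)/alpha_k, with the spectral
    coefficient alpha_k = <s_k, y_k> / <s_k, s_k>, where s_k and y_k are
    the changes in the iterate and in the residual. No line search.
    """
    x = np.asarray(x0, dtype=float)
    Fx = F(x)
    alpha = 1.0
    for _ in range(max_iter):
        if np.linalg.norm(Fx) < tol:
            break
        x_new = x - Fx / alpha
        F_new = F(x_new)
        s, y = x_new - x, F_new - Fx
        sTs = s @ s
        # fall back to a unit coefficient if the spectral estimate degenerates
        alpha = (s @ y) / sTs if sTs > 0 and s @ y > 0 else 1.0
        x, Fx = x_new, F_new
    return x

# A small, mildly nonlinear monotone system: A x + 0.1 x^3 = b.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([2.0, 1.0])

def F(x):
    return A @ x + 0.1 * x**3 - b

root = bb_solve(F, x0=np.zeros(2))
```

The appeal noted in the abstract shows up here: only a few vectors are stored, so the same loop scales to thousands of unknowns.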
Unifying Optimization Algorithms to Aid Software System Users: optimx for R
John C. Nash, Ravi Varadhan
Journal of Statistical Software, 2011.
Abstract: R users can often solve optimization tasks easily using the tools in the optim function in the stats package provided by default on R installations. However, there are many other optimization and nonlinear modelling tools in R or in easily installed add-on packages. These present users with a bewildering array of choices. optimx is a wrapper to consolidate many of these choices for the optimization of functions that are mostly smooth with parameters at most bounds-constrained. We attempt to provide some diagnostic information about the function, its scaling and parameter bounds, and the solution characteristics. optimx runs a battery of methods on a given problem, thus facilitating comparative studies of optimization algorithms for the problem at hand. optimx can also be a useful pedagogical tool for demonstrating the strengths and pitfalls of different classes of optimization approaches including Newton, gradient, and derivative-free methods.
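optimx is an R wrapper; as a rough Python analogue (an assumption for illustration, not the package itself), `scipy.optimize.minimize` can be looped over several bound-capable methods on one smooth test problem to get the same kind of side-by-side comparison:

```python
import numpy as np
from scipy.optimize import minimize

def rosenbrock(x):
    """Classic smooth test function with its minimum at (1, 1)."""
    return (1.0 - x[0])**2 + 100.0 * (x[1] - x[0]**2)**2

x0 = np.array([-1.2, 1.0])
bounds = [(-2.0, 2.0), (-2.0, 2.0)]

# optimx-style battery: run several methods on the same problem
# and collect comparable diagnostics for each.
results = {}
for method in ["L-BFGS-B", "Powell", "Nelder-Mead"]:
    res = minimize(rosenbrock, x0, method=method, bounds=bounds)
    results[method] = {"f": res.fun, "nfev": res.nfev}
```

Comparing `f` and `nfev` across methods mirrors the comparative studies the abstract describes: a quasi-Newton, a derivative-free direction-set, and a simplex method on the same bounded problem.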
A framework for organizing and selecting quantitative approaches for benefit-harm assessment
Milo A. Puhan, Sonal Singh, Carlos O. Weiss, Ravi Varadhan
BMC Medical Research Methodology, 2012, DOI: 10.1186/1471-2288-12-173
Abstract: Background Several quantitative approaches for benefit-harm assessment of health care interventions exist, but it is unclear how the approaches differ. Our aim was to review existing quantitative approaches for benefit-harm assessment and to develop an organizing framework that clarifies differences and aids selection of quantitative approaches for a particular benefit-harm assessment. Methods We performed a review of the literature to identify quantitative approaches for benefit-harm assessment. Our team, consisting of clinicians, epidemiologists, and statisticians, discussed the approaches and identified their key characteristics. We developed a framework that helps investigators select quantitative approaches for benefit-harm assessment that are appropriate for a particular decision-making context. Results Our framework for selecting quantitative approaches requires a concise definition of the treatment comparison and population of interest, identification of key benefit and harm outcomes, and determination of the need for a measure that puts all outcomes on a single scale (which we call a benefit-harm comparison metric). We identified 16 quantitative approaches for benefit-harm assessment. These approaches can be categorized into those that consider single or multiple key benefit and harm outcomes, and those that use a benefit-harm comparison metric or not. Most approaches use aggregate data and can be used in the context of single studies or systematic reviews. Although the majority of approaches provide a benefit-harm comparison metric, only four approaches provide measures of uncertainty around the benefit-harm comparison metric (such as a 95 percent confidence interval). None of the approaches considers the actual joint distribution of benefit and harm outcomes, but one approach considers competing risks when calculating profile-specific event rates. Nine approaches explicitly allow incorporating patient preferences.
Conclusion The choice of quantitative approaches depends on the specific question and goal of the benefit-harm assessment as well as on the nature and availability of data. In some situations, investigators may identify only one appropriate approach. In situations where the question and available data justify more than one approach, investigators may want to use multiple approaches and compare the consistency of results. When more evidence on relative advantages of approaches accumulates from such comparisons, it will be possible to make more specific recommendations on the choice of approaches.
Support of personalized medicine through risk-stratified treatment recommendations - an environmental scan of clinical practice guidelines
Tsung Yu, Daniela Vollenweider, Ravi Varadhan, Tianjing Li
BMC Medicine, 2013, DOI: 10.1186/1741-7015-11-7
Abstract: Background Risk-stratified treatment recommendations facilitate treatment decision-making that balances patient-specific risks and preferences. It is unclear if and how such recommendations are developed in clinical practice guidelines (CPGs). Our aim was to assess if and how CPGs develop risk-stratified treatment recommendations for the prevention or treatment of common chronic diseases. Methods We searched the United States National Guideline Clearinghouse for US, Canadian and National Institute for Health and Clinical Excellence (United Kingdom) CPGs for heart disease, stroke, cancer, chronic obstructive pulmonary disease and diabetes that make risk-stratified treatment recommendations. We included only those CPGs that made risk-stratified treatment recommendations based on risk assessment tools. Two reviewers independently identified CPGs and extracted information on recommended risk assessment tools; type of evidence about treatment benefits and harms; methods for linking risk estimates to treatment evidence and for developing treatment thresholds; and consideration of patient preferences. Results We identified 20 CPGs that made risk-stratified treatment recommendations out of 133 CPGs that made any type of treatment recommendations for the chronic diseases considered in this study. Of the included 20 CPGs, 16 (80%) used evidence about treatment benefits from randomized controlled trials, meta-analyses or other guidelines, and the source of evidence was unclear in the remaining four (20%) CPGs. Nine CPGs (45%) used evidence on harms from randomized controlled trials or observational studies, while 11 CPGs (55%) did not clearly refer to harms. 
Nine CPGs (45%) explained how risk prediction and evidence about treatment effects were linked (for example, applying estimates of relative risk reductions to absolute risks), but only one CPG (5%) assessed benefit and harm quantitatively, and three CPGs (15%) explicitly reported consideration of patient preferences. Conclusions Only a small proportion of CPGs for chronic diseases make risk-stratified treatment recommendations, with a focus on heart disease and stroke prevention, diabetes and breast cancer. For most CPGs it is unclear how risk-stratified treatment recommendations were developed. As a consequence, it is uncertain whether CPGs support patients and physicians in finding an acceptable benefit-harm balance that reflects both profile-specific outcome risks and preferences.
Elevated Serum Carboxymethyl-Lysine, an Advanced Glycation End Product, Predicts Severe Walking Disability in Older Women: The Women's Health and Aging Study I
Kai Sun, Richard D. Semba, Linda P. Fried, Debra A. Schaumberg, Luigi Ferrucci, Ravi Varadhan
Journal of Aging Research, 2012, DOI: 10.1155/2012/586385
Abstract: Advanced glycation end products (AGEs) have been implicated in the pathogenesis of sarcopenia. Our aim was to characterize the relationship between serum carboxymethyl-lysine (CML), a major circulating AGE, and incident severe walking disability (inability to walk or walking speed <0.4 m/sec) over 30 months of followup in 394 moderately to severely disabled women, ≥65 years, living in the community in Baltimore, Maryland (the Women’s Health and Aging Study I). During followup, 154 (26.4%) women developed severe walking disability, and 23 women died. Women in the highest quartile of serum CML had an increased risk of developing severe walking disability in a multivariate Cox proportional hazards model, adjusting for age and other potential confounders. Women with elevated serum CML are at an increased risk of developing severe walking disability. AGEs are a potentially modifiable risk factor. Further work is needed to establish a causal relationship between AGEs and walking disability. 1. Introduction Mobility difficulties are common among older adults, are associated with poor quality of life [1] and increased need for care, and are predictive of death [2–4]. Understanding the processes that lead to disability is important in order to develop strategies to prevent or delay disability in older adults. Lifestyle factors that may influence the pathway to disability include diet, which has been incompletely characterized in relation to the development of disability. Recent studies suggest that advanced glycation end products (AGEs), which are active biomolecules formed by the non-enzymatic covalent binding of sugars with proteins and other molecules, may be related to muscle strength and physical performance [5, 6]. The western diet is high in AGEs, which are formed in high concentrations in foods that are prepared at high temperatures. Thus, some foods are considered an important exogenous source of AGEs.
AGEs are thought to be absorbed in the process of digestion, circulate in the blood, and can be deposited in different organs and tissues [7]. Sarcopenia, or loss of muscle strength and muscle mass, is an important factor underlying mobility difficulties such as slow walking speed in older adults [8]. Older adults have increased cross-linking of collagen and deposition of AGEs in skeletal muscle [9]. In aging animals, cross-linking of collagen is associated with increased muscle stiffness, reduced muscle function [10, 11], and accumulation of AGEs [12]. AGEs may also play a role in sarcopenia through upregulation of inflammation and endothelial
Special invited paper. Large deviations
S. R. S. Varadhan
Mathematics, 2008, DOI: 10.1214/07-AOP348
Abstract: This paper is based on Wald Lectures given at the annual meeting of the IMS in Minneapolis during August 2005. It is a survey of the theory of large deviations.
Random walks in a random environment
S. R. S. Varadhan
Mathematics, 2005.
Abstract: Random walks as well as diffusions in random media are considered. Methods are developed that allow one to establish large deviation results for both the “quenched” and the “averaged” case.
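As a concrete illustration not taken from the paper, a one-dimensional walk in random environment can be simulated by first drawing a "quenched" environment of site-dependent jump probabilities and then running the walk inside it; averaging over freshly drawn environments would give the "averaged" (annealed) law. All choices below (uniform environment on [0.3, 0.7], nearest-neighbour walk on the integers) are illustrative assumptions.

```python
import random

def rwre_path(n_steps, env_size=10_000, rng=None):
    """Simulate a 1-d random walk in a random environment.

    Quenched setting: the environment {p(x)} of right-jump probabilities
    is drawn once (i.i.d. uniform on [0.3, 0.7], an illustrative choice),
    then the walk moves x -> x+1 with probability p(x), else x -> x-1.
    """
    rng = rng or random.Random()
    # environment indexed by site, centred at the origin
    env = {x: rng.uniform(0.3, 0.7) for x in range(-env_size, env_size + 1)}
    x, path = 0, [0]
    for _ in range(n_steps):
        x += 1 if rng.random() < env[x] else -1
        path.append(x)
    return path

path = rwre_path(1000, rng=random.Random(42))
```

Re-running with the same environment probes quenched behaviour; redrawing the environment each run and averaging probes the annealed one.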
Fighting Uncertainty with Uncertainty: A Baby Step
Ravi Kashyap
Theoretical Economics Letters (TEL), 2017, DOI: 10.4236/tel.2017.75097
Abstract: We can overcome uncertainty with uncertainty. Using randomness in our choices and in what we control, and hence in the decision-making process, we could potentially offset the uncertainty inherent in the environment and yield better outcomes. The example we develop in greater detail is the news-vendor inventory management problem with demand uncertainty. We briefly discuss areas where such an approach might be helpful, with the common prescription “Don’t Simply Optimize, Also Randomize”, perhaps best described by the term “Randoptimization”: 1) News-Vendor Inventory Management; 2) School Admissions; 3) Journal Submissions; 4) Job Candidate Selection; 5) Stock Picking; 6) Monetary Policy. This methodology is suitable for the social sciences since the primary source of uncertainty is the members of the system themselves and, at present, no methods are known to fully determine the outcomes in such an environment, which would perhaps require being able to read the minds of everyone involved and to anticipate their actions continuously. Admittedly, we are not qualified to recommend whether such an approach is conducive for the natural sciences, unless perhaps bounds can be established on the levels of uncertainty in a system and it is shown conclusively that a better understanding of the system, and hence improved decision making, will not alter the outcomes.
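The news-vendor setting the abstract builds on has a standard baseline: with unit underage cost c_u (margin lost on unmet demand) and overage cost c_o (loss on each unsold unit), the classical optimal order quantity is the critical-fractile quantile q* = F⁻¹(c_u/(c_u+c_o)) of the demand distribution. The sketch below shows only this baseline optimization, not the paper's randomization scheme, and every number (prices, the Normal demand model) is an illustrative assumption.

```python
from statistics import NormalDist

# Illustrative costs (assumptions, not from the paper):
# sell at 10, buy at 6, salvage unsold units at 4.
price, cost, salvage = 10.0, 6.0, 4.0
c_under = price - cost      # margin lost per unit of unmet demand = 4
c_over = cost - salvage     # loss per unsold unit = 2

# Demand model assumed Normal(mean=100, sd=20) purely for illustration.
demand = NormalDist(mu=100.0, sigma=20.0)

# Classical news-vendor solution: order the critical-fractile quantile
# q* = F^{-1}(c_under / (c_under + c_over)).
fractile = c_under / (c_under + c_over)   # 2/3 here
q_star = demand.inv_cdf(fractile)
```

Because underage here costs more than overage, the fractile exceeds 1/2 and the order quantity sits above mean demand; the paper's proposal would then randomize around such a baseline.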
Nonconventional limit theorems in discrete and continuous time via martingales
Yuri Kifer, S. R. S. Varadhan
Mathematics, 2010, DOI: 10.1214/12-AOP796
Abstract: We obtain functional central limit theorems for both discrete time expressions of the form $1/\sqrt{N}\sum_{n=1}^{[Nt]}(F(X(q_1(n)),\ldots, X(q_{\ell}(n)))-\bar{F})$ and similar expressions in the continuous time where the sum is replaced by an integral. Here $X(n),n\geq0$ is a sufficiently fast mixing vector process with some moment conditions and stationarity properties, $F$ is a continuous function with polynomial growth and certain regularity properties, $\bar{F}=\int F\,d(\mu\times\cdots\times\mu)$, $\mu$ is the distribution of $X(0)$ and $q_i(n)=in$ for $i\le k\leq\ell$ while for $i>k$ they are positive functions taking on integer values on integers with some growth conditions which are satisfied, for instance, when $q_i$'s are polynomials of increasing degrees. These results decisively generalize [Probab. Theory Related Fields 148 (2010) 71-106], whose method was only applicable to the case $k=2$ under substantially more restrictive moment and mixing conditions and which could not be extended to convergence of processes and to the corresponding continuous time case. As in [Probab. Theory Related Fields 148 (2010) 71-106], our results hold true when $X_i(n)=T^nf_i$, where $T$ is a mixing subshift of finite type, a hyperbolic diffeomorphism or an expanding transformation taken with a Gibbs invariant measure, as well as in the case when $X_i(n)=f_i({\Upsilon }_n)$, where ${\Upsilon }_n$ is a Markov chain satisfying the Doeblin condition considered as a stationary process with respect to its invariant measure. Moreover, our relaxed mixing conditions yield applications to other types of dynamical systems and Markov processes, for instance, where a spectral gap can be established. The continuous time version holds true when, for instance, $X_i(t)=f_i(\xi_t)$, where $\xi_t$ is a nondegenerate continuous time Markov chain with a finite state space or a nondegenerate diffusion on a compact manifold. 
A partial motivation for such limit theorems is due to a series of papers dealing with nonconventional ergodic averages.
Brownian Occupation Measures, Compactness and Large Deviations
Chiranjib Mukherjee, S. R. S. Varadhan
Mathematics, 2014.
Abstract: In proving large deviation estimates, the lower bound for open sets and upper bound for compact sets are essentially local estimates. On the other hand, the upper bound for closed sets is global, and compactness of the space or an exponential tightness estimate is needed to establish it. In dealing with the occupation measure $L_t(A)=\frac{1}{t}\int_0^t \mathbf{1}_A(W_s)\,ds$ of the $d$-dimensional Brownian motion, which is not positive recurrent, there is no possibility of exponential tightness. The space of probability distributions $\mathcal{M}_1(\mathbb{R}^d)$ can be compactified by replacing the usual topology of weak convergence by the vague topology, where the space is treated as the dual of continuous functions with compact support. This is essentially the one-point compactification of $\mathbb{R}^d$ by adding a point at $\infty$, which results in the compactification of $\mathcal{M}_1(\mathbb{R}^d)$ by allowing some mass to escape to the point at $\infty$. If one were to use only test functions that are continuous and vanish at $\infty$, then the compactification results in the space of sub-probability distributions $\mathcal{M}_{\le 1}(\mathbb{R}^d)$ by ignoring the mass at $\infty$. The main drawback of this compactification is that it ignores the underlying translation invariance. More explicitly, we may be interested in the space of equivalence classes of orbits $\widetilde{\mathcal{M}}_1=\widetilde{\mathcal{M}}_1(\mathbb{R}^d)$ under the action of the translation group $\mathbb{R}^d$ on $\mathcal{M}_1(\mathbb{R}^d)$. There are problems for which it is natural to compactify this space of orbits. We will provide such a compactification, prove a large deviation principle there and give an application to a relevant problem.


Copyright © 2008-2020 Open Access Library. All rights reserved.